Precognition Lab @ HKUST (Guangzhou)
Our research lab, the Precognition Lab (智能感知与预测实验室), is focused on building human-level Embodied AI systems that can effectively perceive, reason about, and interact with the real world for the benefit of humans. Here is an up-to-date research roadmap.
Our lab's computing resources include 36 RTX 3090/4090/L40 GPUs and a cluster of 24 A6000 GPUs with a 100TB NAS; see this post. We also have multiple mobile platforms equipped with robot arms and dexterous hands.


Check out our lab's cool publications and demos.
Media Coverage
News
Lab Members
Publications
* indicates corresponding authors.
  1. GLOVER++: Unleashing the Potential of Affordance Learning from Human Behaviors for Robotic Manipulation
    Teli Ma, Jia Zheng, Zifan Wang, Ziyao Gao, Jiaming Zhou, Junwei Liang*
    CoRL 2025
  2. Omni-Perception: Omnidirectional Collision Avoidance for Legged Locomotion in Dynamic Environments
    Zifan Wang, Teli Ma, Yufei Jia, Xun Yang, Jiaming Zhou, Wenlong Ouyang, Qiang Zhang, Junwei Liang*
    CoRL 2025 (Oral, ~6% acceptance rate)
  3. Mitigating the Human-Robot Domain Discrepancy in Visual Pre-training for Robotic Manipulation
    Jiaming Zhou, Teli Ma, Kun-Yu Lin, Ronghe Qiu, Zifan Wang, Junwei Liang*
    CVPR 2025
  4. SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding
    Rong Li, Shijie Li, Lingdong Kong, Xulei Yang, Junwei Liang*
    CVPR 2025
  5. From Cognition to Precognition: A Future-Aware Framework for Social Navigation
    Zeying Gong, Tianshuai Hu, Ronghe Qiu, Junwei Liang*
    ICRA 2025
  6. GaussianProperty: Integrating Physical Properties to 3D Gaussians with LMMs
    Xinli Xu, Wenhang Ge, Dicong Qiu, Zhifei Chen, Dongyu Yan, Zhuoyun Liu, Haoyu Zhao, Hanfeng Zhao, Shunsi Zhang, Junwei Liang*, Ying-Cong Chen*
    ICCV 2025
  7. Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation
    Teli Ma, Jiaming Zhou, Zifan Wang, Ronghe Qiu, Junwei Liang*
    CoRL 2024
  8. Prioritized Semantic Learning for Zero-shot Instance Navigation
    Xinyu Sun, Lizhao Liu, Hongyan Zhi, Ronghe Qiu, Junwei Liang*
    ECCV 2024
  9. Dragtraffic: Interactive and Controllable Traffic Scene Generation for Autonomous Driving
    Sheng Wang, Ge Sun, Fulong Ma, Tianshuai Hu, Qiang Qin, Yongkang Song, Lei Zhu, Junwei Liang*
    IROS 2024
  10. Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models
    Xiaoyu Zhu, Hao Zhou, Pengfei Xing, Long Zhao, Hao Xu, Junwei Liang, Alexander Hauptmann, Ting Liu, Andrew Gallagher
    ECCV 2024
  11. An Examination of the Compositionality of Large Generative Vision-Language Models
    Teli Ma, Rong Li, Junwei Liang*
    NAACL 2024

Preprints
  12. Exploring the Limits of Vision-Language-Action Manipulations in Cross-task Generalization
    Jiaming Zhou, Ke Ye, Jiayi Liu, Teli Ma, Zifan Wang, Ronghe Qiu, Kun-Yu Lin, Zhilin Zhao, Junwei Liang*
  13. Stairway to Success: Zero-Shot Floor-Aware Object-Goal Navigation via LLM-Driven Coarse-to-Fine Exploration
    Zeying Gong, Rong Li, Tianshuai Hu, Ronghe Qiu, Lingdong Kong, Lingfeng Zhang, Yiyi Ding, Leying Zhang, Junwei Liang*
  14. GLOVER: Generalizable Open-Vocabulary Affordance Reasoning for Task-Oriented Grasping
    Teli Ma, Zifan Wang, Jiaming Zhou, Mengmeng Wang, Junwei Liang*
  15. Open-vocabulary Mobile Manipulation in Unseen Dynamic Environments with 3D Semantic Maps
    Dicong Qiu, Wenzong Ma, Zhenfu Pan, Hui Xiong, Junwei Liang*